Results 1 - 5 of 5
1.
Life (Basel); 11(11), 2021 Nov 22.
Article in English | MEDLINE | ID: mdl-34833156

ABSTRACT

(1) Background: The SARS-CoV-2 pandemic overwhelmed intensive care units, clinicians, and radiologists, so methods to forecast disease severity became a necessity and a helpful tool. (2) Methods: In this paper, we proposed an artificial intelligence-based multimodal approach to forecast the future disease severity of patients with laboratory-confirmed SARS-CoV-2 infection. At hospital admission, we collected 46 clinical and biological variables together with chest X-ray scans from 475 patients who tested positive for COVID-19. An ensemble of machine learning algorithms (AI-Score) was developed to predict the future severity score as mild, moderate, or severe for COVID-19-infected patients. Additionally, a deep learning module (CXR-Score) was developed to automatically classify the chest X-ray images and integrate its output into AI-Score. (3) Results: AI-Score predicted COVID-19 severity on the testing/control dataset (95 patients) with an average accuracy of 98.59%, average specificity of 98.97%, and average sensitivity of 97.93%. The CXR-Score module graded the severity of chest X-ray images with an average accuracy of 99.08% on the testing/control dataset (95 chest X-ray images). (4) Conclusions: Our study demonstrated that deep learning methods integrating clinical and biological data with chest X-ray images accurately predicted the COVID-19 severity score of positively tested patients.
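The abstract describes fusing the CXR-Score image grade with 46 admission variables in a machine learning ensemble. Below is a minimal sketch of that multimodal idea, assuming a scikit-learn soft-voting ensemble and a single CXR-derived severity score appended as an extra tabular feature; the estimators actually used in the published AI-Score are not listed in the abstract, so the three shown here are illustrative assumptions.

```python
# Illustrative multimodal severity classifier: clinical/biological variables
# fused with a CXR-derived severity score, classified by a soft-voting ensemble.
# The choice of estimators is an assumption, not the paper's exact AI-Score.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def build_severity_ensemble() -> VotingClassifier:
    """Soft-voting ensemble over admission variables plus a chest X-ray score."""
    return VotingClassifier(
        estimators=[
            ("logreg", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
            ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
            ("gb", GradientBoostingClassifier(random_state=0)),
        ],
        voting="soft",
    )

def fit_multimodal(X_clinical: np.ndarray, cxr_score: np.ndarray, y: np.ndarray):
    # X_clinical: (n_patients, 46) admission variables
    # cxr_score:  (n_patients,) severity grade from the imaging module
    # y:          mild / moderate / severe labels
    X = np.column_stack([X_clinical, cxr_score])  # fuse tabular and imaging features
    clf = build_severity_ensemble()
    clf.fit(X, y)
    return clf
```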

2.
PLoS One; 16(6): e0251701, 2021.
Article in English | MEDLINE | ID: mdl-34181680

ABSTRACT

Differential diagnosis of focal pancreatic masses is based on endoscopic ultrasound (EUS) guided fine needle aspiration biopsy (EUS-FNA/FNB). Several imaging techniques (gray-scale, color Doppler, contrast enhancement, and elastography) are used for differential diagnosis; however, diagnosis remains highly operator dependent. To address this problem, machine learning algorithms (MLA) can generate an automatic computer-aided diagnosis (CAD) by analyzing a large number of clinical images in real time. We aimed to develop an MLA to characterize focal pancreatic masses during the EUS procedure. The study included 65 patients with focal pancreatic masses, with 20 EUS images selected from each patient (gray-scale, color Doppler, arterial- and venous-phase contrast enhancement, and elastography). Images were classified based on the cytopathology exam as chronic pseudotumoral pancreatitis (CPP), neuroendocrine tumor (PNET), or ductal adenocarcinoma (PDAC). The MLA is based on a deep learning method that combines convolutional (CNN) and long short-term memory (LSTM) neural networks. A total of 2688 images were used for training and 672 images for testing the deep learning models. The CNN was developed to identify the discriminative features of individual images, while the LSTM network was used to extract the dependencies between images. The model predicted the clinical diagnosis with an area under the curve of 0.98 and an overall accuracy of 98.26%. The negative (NPV) and positive (PPV) predictive values and the corresponding 95% confidence intervals (CI) are 96.7% [94.5, 98.9] and 98.1% [96.81, 99.4] for PDAC; 96.5% [94.1, 98.8] and 99.7% [99.3, 100] for CPP; and 98.9% [97.5, 100] and 98.3% [97.1, 99.4] for PNET. Following further validation on an independent test cohort, this method could become an efficient CAD tool for differentiating focal pancreatic masses in real time.
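A minimal PyTorch sketch of the CNN + LSTM idea described above: a CNN encodes each of the 20 EUS images from one patient, and an LSTM aggregates the resulting sequence into one of the three diagnoses (CPP, PNET, PDAC). The ResNet-18 backbone, hidden size, and input resolution are assumptions for illustration, not the paper's actual architecture.

```python
# Sequence classifier: per-image CNN features -> LSTM over the 20-image sequence.
import torch
import torch.nn as nn
from torchvision import models

class EUSSequenceClassifier(nn.Module):
    def __init__(self, num_classes: int = 3, hidden_size: int = 256):
        super().__init__()
        backbone = models.resnet18(weights=None)   # per-image feature extractor (assumed)
        feat_dim = backbone.fc.in_features         # 512 for ResNet-18
        backbone.fc = nn.Identity()                # keep pooled features only
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 20, 3, H, W) -- one 20-image EUS sequence per patient
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(feats)             # last hidden state summarizes the sequence
        return self.head(h_n[-1])                  # logits over CPP / PNET / PDAC

logits = EUSSequenceClassifier()(torch.randn(2, 20, 3, 224, 224))  # -> shape (2, 3)
```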


Subject(s)
Pancreas/pathology; Pancreatic Neoplasms/diagnosis; Adenocarcinoma/diagnosis; Adenocarcinoma/pathology; Diagnosis, Computer-Assisted/methods; Diagnosis, Differential; Endoscopic Ultrasound-Guided Fine Needle Aspiration/methods; Endosonography/methods; Humans; Neural Networks, Computer; Pancreatic Neoplasms/pathology; Pilot Projects; Sensitivity and Specificity
3.
Medicina (Kaunas); 57(4), 2021 Apr 19.
Article in English | MEDLINE | ID: mdl-33921597

ABSTRACT

Background and Objectives: Thyroid disorders currently have a high incidence in the worldwide population, so the development of alternative methods to improve the diagnostic process is necessary. Materials and Methods: For this purpose, we developed an ensemble method that fuses two deep learning models, one based on a convolutional neural network and the other based on transfer learning. For the first model, called 5-CNN, we developed an efficient end-to-end trained model with five convolutional layers, while for the second model, the pre-trained VGG-19 architecture was repurposed, optimized, and trained. We trained and validated our models using a dataset of ultrasound images consisting of four types of thyroid images: autoimmune, nodular, micro-nodular, and normal. Results: Excellent results were obtained by the ensemble CNN-VGG method, which outperformed the 5-CNN and VGG-19 models: an overall test accuracy of 97.35%, with an overall specificity of 98.43%, sensitivity of 95.75%, and positive and negative predictive values of 95.41% and 98.05%, respectively. The micro-averaged area under the receiver operating characteristic curves was 0.96. The results were also validated by two physicians: an endocrinologist and a pediatrician. Conclusions: We proposed a new deep learning study for classifying thyroid ultrasound images to assist physicians in the diagnostic process.
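A minimal sketch of the fusion step described above, assuming the CNN-VGG ensemble averages the softmax probabilities of the 5-CNN and the fine-tuned VGG-19 over the four thyroid classes. The equal weighting, the VGG-19 head replacement, and the class ordering shown here are assumptions; the abstract does not specify how the two models are combined.

```python
# Soft-voting fusion of a small custom CNN and a repurposed VGG-19 (assumed scheme).
import torch
import torch.nn as nn
from torchvision import models

THYROID_CLASSES = ["autoimmune", "nodular", "micro-nodular", "normal"]

def make_vgg19(num_classes: int = 4) -> nn.Module:
    vgg = models.vgg19(weights=None)                   # repurposed backbone
    vgg.classifier[6] = nn.Linear(4096, num_classes)   # new 4-class head
    return vgg

@torch.no_grad()
def ensemble_predict(cnn5: nn.Module, vgg19: nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Average the two models' class probabilities and pick the most probable class."""
    p1 = torch.softmax(cnn5(images), dim=1)            # cnn5: the paper's 5-layer CNN (not shown)
    p2 = torch.softmax(vgg19(images), dim=1)
    return ((p1 + p2) / 2).argmax(dim=1)               # indices into THYROID_CLASSES
```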


Subject(s)
Deep Learning; Humans; Neural Networks, Computer; ROC Curve; Thyroid Gland/diagnostic imaging; Ultrasonography
4.
Med Ultrason; 23(2): 135-139, 2021 May 20.
Article in English | MEDLINE | ID: mdl-33626114

ABSTRACT

AIM: In this paper we proposed different architectures of convolutional neural networks (CNN) to classify fatty liver disease in images using only pixels and diagnosis labels as input. We trained and validated our models using a dataset of 629 images consisting of two types of liver images, normal and liver steatosis. MATERIAL AND METHODS: We assessed two pre-trained convolutional neural network models, Inception-v3 and VGG-16, using fine-tuning. Both models were pre-trained on the ImageNet dataset to extract features from B-mode ultrasound liver images. The results obtained through these methods were compared in order to select the predictive model with the best performance metrics. We trained the two models using a dataset of 262 images of liver steatosis and 234 images of normal liver, and assessed them using a dataset of 70 liver steatosis images and 63 normal liver images. RESULTS: The proposed model that used Inception-v3 obtained a 93.23% test accuracy with a sensitivity of 89.9%, a precision of 96.6%, and an area under the receiver operating characteristic curve (ROC AUC) of 0.93. The other proposed model, which used VGG-16, obtained a 90.77% test accuracy with a sensitivity of 88.9%, a precision of 92.85%, and a ROC AUC of 0.91. CONCLUSION: The deep learning algorithms that we proposed to detect steatosis and classify images as normal or fatty liver yield an excellent test performance of over 90%. However, future larger studies are required in order to establish how these algorithms can be implemented in a clinical setting.
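An illustrative fine-tuning setup along the lines described above: VGG-16 pre-trained on ImageNet with its final layer replaced by a two-class head (normal vs. steatosis). Freezing the convolutional features and the optimizer settings are assumptions, not details reported in the paper; the same pattern would apply to Inception-v3.

```python
# Transfer-learning sketch: ImageNet-pretrained VGG-16 fine-tuned for two liver classes.
import torch.nn as nn
from torch.optim import Adam
from torchvision import models

def build_fatty_liver_vgg16(freeze_features: bool = True) -> nn.Module:
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False              # keep ImageNet convolutional filters fixed
    model.classifier[6] = nn.Linear(4096, 2)     # normal vs. liver steatosis
    return model

model = build_fatty_liver_vgg16()
optimizer = Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
```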


Subject(s)
Deep Learning; Fatty Liver; Fatty Liver/diagnostic imaging; Humans; Middle Aged; Ultrasonography
5.
Curr Health Sci J; 46(2): 136-140, 2020.
Article in English | MEDLINE | ID: mdl-32874685

ABSTRACT

Due to the high incidence of skin tumors, the development of computer-aided diagnosis methods will become a very powerful diagnostic tool for dermatologists. Skin diseases are initially diagnosed visually, through clinical screening, followed in some cases by dermoscopic analysis, biopsy, and histopathological examination. Automatic classification of dermatoscopic images is a challenge due to fine-grained variations in lesions. The convolutional neural network (CNN), one of the most powerful deep learning techniques, has proved to be superior to traditional algorithms. These networks provide the flexibility of extracting discriminative features from images while preserving the spatial structure, and can be developed for region recognition and medical image classification. In this paper we proposed a CNN architecture to classify skin lesions using only image pixels and diagnosis labels as inputs. We trained and validated the CNN model using a public dataset of 10015 images consisting of 7 types of skin lesions: actinic keratoses and intraepithelial carcinoma/Bowen disease (akiec), basal cell carcinoma (bcc), benign keratosis-like lesions (solar lentigines/seborrheic keratoses and lichen planus-like keratoses, bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi (nv), and vascular lesions (angiomas, angiokeratomas, pyogenic granulomas and hemorrhages, vasc).
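A small PyTorch sketch of a CNN of the kind described above, mapping dermatoscopic image pixels to logits over the seven lesion classes. The number of convolutional blocks, filter counts, and input resolution are illustrative assumptions; the paper's exact architecture is not given in the abstract.

```python
# Minimal CNN classifier for the seven skin-lesion classes (illustrative architecture).
import torch
import torch.nn as nn

SKIN_CLASSES = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]

class SkinLesionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        def block(cin: int, cout: int) -> nn.Sequential:
            # conv -> batch norm -> ReLU -> downsample by 2
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(block(3, 32), block(32, 64), block(64, 128))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))       # logits over SKIN_CLASSES

logits = SkinLesionCNN()(torch.randn(4, 3, 224, 224))   # -> shape (4, 7)
```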
